Patent abstract:
A system for the simultaneous acquisition of color and near-infrared images, comprising a single matrix sensor having a first, second and third type of pixels sensitive to respective visible colors and a fourth, panchromatic type of pixel, these pixels also being sensitive in the near infrared; and a signal processing circuit configured to: reconstruct a first set of monochromatic images from signals generated by the pixels of the first, second and third types; reconstruct a panchromatic image from signals generated by the pixels of the fourth type; reconstruct a second set of monochromatic images from signals generated by the pixels of the first, second and third types and said panchromatic image; reconstruct a color image from the images of the first set and the panchromatic image; and reconstruct at least one near-infrared image from the images of the second set and the panchromatic image. A visible - near-infrared bi-spectral camera comprising such an acquisition system, and a method implemented by means of such a camera.
Publication number: FR3045263A1
Application number: FR1502572
Filing date: 2015-12-11
Publication date: 2017-06-16
Inventors: Raphael Horak; Yves Courcol; Ludovic Perruchot
Applicant: Thales SA
IPC main classification:
Patent description:

In other words, the luminance of each colored pixel to be reconstructed is determined from that of the immediately preceding colored pixel, in the reconstruction order, by applying to it the local variation rate measured on the panchromatic image.
If the luminance must be an integer value, the computed result is rounded.
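As an illustration only, this step-by-step reconstruction can be sketched as follows; the multiplicative form of the local variation rate (M[i] / M[i-1]) is an assumption, since the patent's formula is not reproduced in this excerpt, and the function name is illustrative.

```python
def reconstruct_stepwise(c1, m, integer_luminance=True):
    """Step-by-step ("de proche en proche") reconstruction sketch.

    c1: measured luminance of the first colored pixel of the segment.
    m:  panchromatic luminances M1..M5 over the same segment.
    Each colored pixel is derived from the immediately preceding one by
    applying the local variation rate of the panchromatic image
    (assumed multiplicative here: M[i] / M[i-1]).
    """
    c = [c1]
    for i in range(1, len(m)):
        value = c[-1] * m[i] / m[i - 1]
        # If the luminance must be an integer value, the result is rounded.
        c.append(round(value) if integer_luminance else value)
    return c
```

On a uniform panchromatic segment the colored luminance is simply propagated unchanged; any local variation of the panchromatic image is transferred proportionally to the colored component.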
If the region of the panchromatic image is not considered uniform (|M5 - M1| > Th), then the colored pixels C2 - C4 can be reconstructed directly by applying a "monochrome law", that is, the affine function expressing Ci as a function of Mi (i = 1 - 5) and such that the computed values of C1 and C5 coincide with the measured values:

Ci = a·Mi + b

with

a = (C5 - C1) / (M5 - M1)

and

b = C1 - a·M1

Here again, if the luminance must be an integer value, the computed result is rounded.
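A minimal sketch of this monochrome law over a 5-pixel segment (function and variable names are illustrative, not from the patent):

```python
def monochrome_law(c1, c5, m, integer_luminance=True):
    """Affine "monochrome law" C_i = a*M_i + b over a segment, with a and b
    chosen so that the computed C1 and C5 coincide with the measured values.

    c1, c5: measured luminances of the first and last colored pixels.
    m:      panchromatic luminances M1..M5 over the segment.
    """
    a = (c5 - c1) / (m[-1] - m[0])  # slope fixed by the two measured endpoints
    b = c1 - a * m[0]
    values = [a * mi + b for mi in m]
    return [round(v) for v in values] if integer_luminance else values
```

By construction the first and last computed values equal the measured C1 and C5.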
Reconstruction by direct application of the monochrome law can lead to an excessively large dynamic range of the reconstructed colored component, or to its saturation. In that case, it may be appropriate to fall back on the step-by-step reconstruction. For example, an excessively large dynamic range can be detected when

where Th1 is a threshold, generally different from Th.
Saturation can be detected if min(Ci) < 0 or if max(Ci) exceeds a maximum admissible value (65535 in the case of a luminance expressed as a 16-bit integer).
Of course, the configuration of Figures 5A and 5B - which involves reconstructing the colored components in segments of 5 pixels - is given purely as an example, and generalizing the method to other configurations poses no significant difficulty.
Variants of this method are possible. For example, another approach for determining the luminance of the colored pixels, relying both on nearby colored pixels (located at a certain distance depending on the pattern of the colored pixels in question) and on the reconstructed neighboring panchromatic pixels, consists in using non-linear functions that approximate the distribution of the colored pixels, relying, for example, on a polynomial approximation (and more generally an approximation by a non-linear spatial surface function) of the neighboring panchromatic pixels. The advantage of these non-linear functions, whether single-axis or two-axis (surface) functions, is that they take into account the distribution of the colored pixels at a larger scale than the colored pixels closest to the pixel to be reconstructed. In the same vein, one can also use more general value-diffusion functions that exploit the local gradients and the abrupt jumps appearing in the luminance values of the panchromatic pixels. Whatever the method used, the principle remains the same: exploit the panchromatic pixels, which are more numerous than the colored pixels, together with their variation law, to reconstruct the colored pixels.
Whereas the monochrome-law method involves a two-pass, single-axis approach, the use of surface functions or diffusion equations makes it possible to reconstruct the colored pixels in a single pass. At this stage of the processing, one has a full-band panchromatic image IMPB and two sets of three full-band monochromatic images (IRPB, IVPB, IBPB) and (IR*PB, IV*PB, IB*PB). As mentioned above, none of these images is directly usable. However, a color image in visible light IVIS can be obtained by combining the full-band images of the first set (IRPB, IVPB, IBPB) and the full-band panchromatic image IMPB by means of a 3x4 colorimetric matrix MCol1. More precisely, the red component IR of the visible image IVIS is given by a linear combination of IRPB, IVPB, IBPB and IMPB with coefficients a11, a12, a13 and a14. Likewise, the green component IV is given by a linear combination of IRPB, IVPB, IBPB and IMPB with coefficients a21, a22, a23 and a24, and the blue component IB is given by a linear combination of IRPB, IVPB, IBPB and IMPB with coefficients a31, a32, a33 and a34. This is illustrated in Figure 6A.
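For a single pixel, applying such a colorimetric matrix reduces to one dot product per output component. The helper below is an illustrative sketch (names and the toy matrices in the usage note are not the patent's calibrated coefficients):

```python
def apply_colorimetry(mcol, planes):
    """Apply a colorimetric matrix to the full-band values of one pixel.

    mcol:   list of rows [a_i1, a_i2, a_i3, a_i4], one row per output
            component (3 rows for MCol1, 1 row for MCol2).
    planes: [IR_PB, IV_PB, IB_PB, IM_PB] values for that pixel.
    """
    return [sum(a * p for a, p in zip(row, planes)) for row in mcol]
```

The same helper covers the 1x4 matrix MCol2 of the near-infrared case by passing a single row, e.g. a row of the form [-1, -1, -1, 1] (purely illustrative) expressing the NIR component as full-band minus visible contributions.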
The visible image IVIS can then be improved by a conventional white-balance operation, to take into account the difference between the illumination of the scene and the illumination used to establish the coefficients of the colorimetric matrix.
Likewise, a near-infrared image IPIR can be obtained by combining the full-band images of the second set (IR*PB, IV*PB, IB*PB) and the full-band panchromatic image IMPB by means of a second, 1x4 colorimetric matrix MCol2. In other words, the near-infrared image IPIR is given by a linear combination of IR*PB, IV*PB, IB*PB and IMPB with coefficients a41, a42, a43 and a44. This is illustrated in Figure 6B.
If several types of pixels have different spectral sensitivities in the near infrared, it is possible to obtain a plurality of different near-infrared images, corresponding to NPIR different spectral sub-bands (with NPIR > 1). In that case, the second colorimetric matrix MCol2 becomes an NPIR x (NPIR + 3) matrix, NPIR being the number of near-infrared images that one wishes to obtain. The case treated above is the particular case where NPIR = 1.
The near-infrared image IPIR can then be improved by a conventional spatial-filtering operation. This operation may, for example, be an edge-enhancement operation, combined or not with adaptive noise filtering (among the possible edge-enhancement techniques, one may cite the application of a high-pass convolution filter to the image).
The matrices MCol1 and MCol2 are in fact sub-matrices of a single colorimetric matrix "A", of dimensions 4x4 in the particular case where NPIR = 1 and where there are three types of colored pixels, which is not used as such.
The size of the colorimetric matrices must be modified if the matrix sensor has more than three different types of colored pixels. By way of example, as with the NPIR pixels having different spectral sensitivities in the infrared, there may be NVIS (with NVIS ≥ 3) types of pixels sensitive to different sub-bands in the visible, in addition to the unfiltered panchromatic pixels. These NVIS types of pixels may moreover have different spectral sensitivities in the near infrared, to allow the acquisition of NPIR PIR images. Assuming that there are no pixels sensitive only in the infrared, the colorimetric matrix MCol1 is then of dimension 3 x (NVIS + 1).
Moreover, the monochromatic visible image IBNL can be obtained by combining the full-band images of the second set (IR*PB, IV*PB, IB*PB) and the full-band panchromatic image IMPB by means of a third, 1x4 colorimetric matrix MCol3. In other words, the image IBNL is given by a linear combination of IR*PB, IV*PB, IB*PB and IMPB with coefficients â41, â42, â43, â44, which form the last row of another 4x4 colorimetric matrix "Â", which is not used as such either. This is illustrated in Figure 6C. The image IBNL can in turn be improved by a conventional spatial-filtering operation.
The colorimetric matrices A and Â can be obtained by a calibration method. This consists, for example, in using a test chart on which various paints reflecting in the visible and the PIR have been deposited, illuminating the device under controlled illumination, and comparing the theoretical luminance values that these paints should have in the visible and the PIR with the measured values, using a 4x4 colorimetric matrix whose coefficients are fitted as well as possible by least squares. The colorimetric matrix can also be improved by weighting the colors that one wants to bring out preferentially, or by adding measurements made on natural objects present in the scene. The proposed method (exploitation of the PIR in addition to color, use of a 4x4 matrix, use of different paints emitting both in the visible and the PIR) differs from conventional methods, which are confined to color and rely on a 3x3 matrix and a conventional test chart such as the "X-Rite checkerboard" or the Macbeth chart.
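The least-squares fit of the 4x4 matrix can be sketched with NumPy; this is an illustrative implementation under the stated assumptions (one row per calibration patch), not the patent's actual calibration procedure:

```python
import numpy as np

def fit_colorimetry(measured, theoretical):
    """Fit the 4x4 colorimetric matrix A minimizing, in the least-squares
    sense, || measured @ A.T - theoretical ||.

    measured:    (N, 4) full-band values per calibration patch
                 (R_PB, V_PB, B_PB, M_PB).
    theoretical: (N, 4) expected visible + PIR luminances of the paints.
    """
    coeffs, *_ = np.linalg.lstsq(measured, theoretical, rcond=None)
    return coeffs.T  # each row of A yields one output component
```

Weighting the colors that one wants to bring out preferentially would amount to a weighted least squares, i.e. scaling the rows of both matrices before the fit.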
Figures 7A and 7B illustrate a possible improvement of the invention, implementing micro-scanning of the sensor. Micro-scanning consists of a periodic displacement (oscillation) of the matrix sensor in the image plane, or of the image with respect to the sensor. This displacement can be obtained by means of a piezoelectric or DC-motor-type actuator, acting on the matrix sensor or on at least one image-forming optical element. Several (generally two) image acquisitions are carried out during this displacement, which improves the spatial sampling and thus facilitates the reconstruction of the images. Of course, this requires a higher acquisition rate than without micro-scanning.
In the example of Figure 7A, the matrix sensor CM has a particular structure: every second column consists of panchromatic pixels PM, one column in four of alternating green pixels PV and blue pixels PB, and one column in four of alternating red pixels PR and green pixels PV. The micro-scanning is performed by means of an oscillation with an amplitude equal to the width of one pixel, in a direction perpendicular to that of the columns. It can be seen that the place occupied by a "panchromatic" line when the sensor is at one end of its travel is occupied by a "colored" line when it is at the opposite end, and vice versa. The image acquisition rate is higher than without micro-scanning (for example twice the frequency obtained without micro-scanning), so as to add information to the images acquired without micro-scanning.
Taking the example of a doubled acquisition frequency, from two images acquired at the two opposite extreme positions of the sensor (left part of Figure 7A) it is possible to reconstruct (right part of the figure): a color image formed by the repetition of a four-pixel pattern - two greens arranged along a diagonal, one blue and one red (the so-called "Bayer" matrix) - formed by reconstructed pixels elongated in the direction of displacement with an aspect ratio of 2; and a "full" panchromatic image, directly usable without any need for interpolation; these two reconstructed images being obtained at a rate half the acquisition frequency.
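The merging of the two extreme micro-scan positions can be sketched as a column interleaving; the even/odd column layout below is an assumption for illustration, not the exact pattern of Figure 7A:

```python
def merge_microscan(frame_a, frame_b):
    """Merge two acquisitions taken at the two extreme positions of a
    one-pixel horizontal micro-scan.

    Assuming even columns are panchromatic in frame_a and, after the
    one-pixel shift, odd columns are panchromatic in frame_b, a full
    panchromatic image and a full colored image can be assembled
    column by column from the two frames.
    """
    pan, color = [], []
    for row_a, row_b in zip(frame_a, frame_b):
        n = len(row_a)
        pan.append([row_a[j] if j % 2 == 0 else row_b[j] for j in range(n)])
        color.append([row_b[j] if j % 2 == 0 else row_a[j] for j in range(n)])
    return pan, color
```

Each output image thus combines samples from both acquisitions, at half the acquisition frequency, as described above.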
Indeed, micro-scanning supplements the information from the panchromatic and colored pixels, and the processing operations presented within the framework of the present invention can be applied directly to the patterns generated from the detector before micro-scanning and to the additional patterns obtained after micro-scanning; it is therefore not essential to use a specific pattern such as that shown in Figure 7A, although such a pattern allows simplified algorithms to be used. Moreover, micro-scanning can be performed along two axes and/or have an amplitude greater than the width of one pixel. By way of example, Figure 7B illustrates the application of micro-scanning to the sensor CM of Figure 3A. In this sparse configuration example, of 1/8 type, micro-scanning results in all the panchromatic pixels being covered and in the addition of 2 blue sub-patterns, 2 red sub-patterns and 4 green sub-patterns, which are superimposed on previously panchromatic locations and which, in total, make it possible to apply the processing to twice as many sub-patterns as initially. In more complex sparse configurations (1/N with N ≥ 16), micro-scanning can be generalized to M positions along the two dimensions of the matrix simultaneously, gaining a factor M on the number of sub-patterns.
Up to now, only the case of a matrix sensor comprising exactly four types of pixels - red, green, blue and panchromatic - has been considered, but this is not an essential limitation. It is possible to use three types of colored pixels, or even more, with sensitivity curves different from those illustrated in Figure 1B. In addition, it is possible to use a fifth type of pixel, sensitive only to near-infrared radiation. By way of example, Figure 8 shows a matrix sensor in which one pixel in four is panchromatic (reference PM), one pixel in four is sensitive only to the near infrared (PI), one pixel in four is green (PV), one pixel in eight is red (PR) and one pixel in eight is blue (PB).
The signals from the pixels of the fifth type can be used in different ways. For example, it is possible to reconstruct, by the "intra-channel" method of Figures 4A - 4D, a near-infrared image, designated for example IPIRD (the superscript "D" meaning that it is an image acquired "directly"), which can be averaged with the image IPIR obtained by applying the colorimetric matrix MCol2 to the images IR*PB, IV*PB, IB*PB and IMPB. It is also possible to use a colorimetric matrix MCol2 of dimensions 1x5 and to obtain the near-infrared image IPIR as a linear combination of IR*PB, IV*PB, IB*PB, IMPB and IPIRD with matrix coefficients a41, a42, a43, a44 and a45. The image IPIRD can also be used to calibrate the colorimetric matrix MCol1, which then contains one more column and becomes of size 3 x (NVIS + 2): each reproduced red, green and blue component is then expressed as a function of the NVIS full-band reconstructed planes, the reconstructed panchromatic plane and the PIR plane reconstructed from the IPIRD pixels.
Figure 9A is a composite image of a scene observed in visible light. The left part of the image, IVIS, was obtained in accordance with the invention, using the "sparse" matrix sensor of Figure 3. The right part, I'VIS, was obtained by a conventional method, using a non-sparse sensor. The quality of the two images is comparable.
Figure 9B is a composite image of the same scene, observed in the near infrared. The left part of the image, IPIR, was obtained in accordance with the invention, using the "sparse" matrix sensor of Figure 3. The right part, I'PIR, was obtained using a PIR camera with a non-sparse sensor. The images are of comparable quality, but the image IPIR obtained in accordance with the invention is brighter thanks to the use of a sparse sensor.
SYSTEM AND METHOD FOR ACQUIRING VISIBLE AND NEAR-INFRARED IMAGES USING A MATRIX SENSOR
The invention relates to a system for acquiring visible and near-infrared images, to a visible - near-infrared dual-spectral camera including such a system, and to a method for the simultaneous acquisition of color and near-infrared images using such a camera.
The "near infrared" ("NIR", or "PIR" from the French "proche infrarouge") corresponds to the spectral band 700 - 1100 nm, while visible light extends between 350 and 700 nm. It is sometimes considered that the near infrared starts at 800 nm, the intermediate band 700 - 800 nm being removed using an optical filter. The invention can be applied both in the defense and security sectors (for example for night vision) and in consumer electronics.
Conventionally, visible light images (hereinafter referred to as "visible images" for the sake of brevity), generally in color, and near-infrared images are acquired independently by means of two separate matrix sensors. In order to reduce the bulk, these two sensors can be associated with a single image-forming optical system via a dichroic splitter plate, so as to form a dual-spectral camera.
Such a configuration has a number of disadvantages. First, the use of two independent sensors and a splitter plate increases the cost, size, power consumption and weight of a dual-spectral camera, which is especially problematic in embedded applications, for example airborne ones. In addition, the optical system must be specially adapted for this application, which limits the possibilities of using commercial optics, further increasing costs.
It has also been proposed, mainly in academic work, to use a single matrix sensor for the acquisition of visible images and in the near infrared. Indeed, the silicon sensors commonly used in digital cameras have a sensitivity that extends from the visible to the near infrared; as a result, the cameras intended to operate only in the visible are equipped with an optical filter intended to avoid a pollution of the image by the infrared component.
The documents: D. Kiku et al. "Simultaneously Capturing of RGB and Additional Band Images Using Hybrid Color Filter Array", Proc. of SPIE-IS&T Electronic Imaging, SPIE Vol. 9023 (2014); and US Pat. No. 8,619,143 describe matrix sensors comprising pixels of four different types: pixels sensitive to blue light, green light and red light, as in conventional "RGB" sensors, but also "gray" pixels, sensitive only to PIR radiation. Conventionally, these different types of pixels are obtained by depositing absorbent filters on elementary silicon sensors which are, by themselves, "panchromatic" (color filter matrix). In general, the pigments used to make these filters are transparent in the near infrared; the images acquired by the "red", "green" and "blue" pixels are therefore affected by the infrared component of the incident light (because the optical filter used in conventional cameras is, of course, absent) and digital processing is necessary to recover colors close to reality.
The documents: Z. Sadeghipoor et al. "Designing Color Filter Arrays for the Joint Capture of Visible and Near-Infrared Images", 16th IEEE Conference on Image Processing (2009); Z. Sadeghipoor et al. "Correlation-Based Joint Acquisition and Demosaicing of Visible and Near-Infrared Images", 18th IEEE Conference on Image Processing (2011) describe sensors with a more complex color filter matrix, designed to optimize the reconstruction of visible and infrared images.
Conversely, Z. Sadeghipoor et al. "A Novel Compressive Sensing Approach to Simultaneously Acquiring Color and Near Infrared Images on a Single Sensor" Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada (2013) describes a sensor whose color filter matrix is close to, but slightly different from, the so-called "Bayer" matrix, which is the most commonly used in color cameras. A conventional Bayer matrix would not separate the visible and infrared components.
These approaches use sensors in which all pixels are equipped with a spectral filter. However, in order to produce high sensitivity cameras, it is advantageous to use sensors also comprising panchromatic pixels, without filters. In some cases, even "sparse" sensors with a high percentage of panchromatic pixels are used to capture most of the incident radiation. These sensors exploit the fact that the chrominance of an image can be subsampled with respect to its luminance without an observer perceiving a significant degradation of its quality.
The document: D. Hertel et al. "A Low-Cost VIS-NIR True Color Night Vision System Based on a Wide Dynamic Range CMOS Imager", IEEE Intelligent Vehicles Symposium, 2009, pages 273-278, describes the use of a sensor comprising colored pixels and panchromatic pixels for the simultaneous acquisition of visible and near-infrared images. The reconstruction method of the PIR images is not explained and no example of such images is shown; only "full-band" monochromatic images are shown, which can be used in low-light conditions. Moreover, this article only concerns the case of an "RGBM" sensor, which contains only 25% panchromatic pixels, which greatly limits the gain in sensitivity that can be achieved.
The invention aims to overcome the aforementioned drawbacks of the prior art. More specifically, it aims to provide a system for the simultaneous acquisition of visible and PIR images with high sensitivity, yielding high-quality images. Preferably, the invention allows the use of commercial matrix sensors and optical systems ("off-the-shelf" or "COTS" components).
According to the invention, high sensitivity is obtained by using sensors comprising both colored pixels (with or without "gray" pixels sensitive to the PIR) and panchromatic pixels in relatively large number (preferably more than a quarter of the pixels), and preferably sparse sensors; high image quality is achieved by the implementation of an innovative digital processing. More precisely, this processing involves the reconstruction of a panchromatic image and of two "intermediate" color visible images. These two intermediate visible images are obtained by means of two different processing operations: one, which can be described as "intra-channel", uses only the signals from the colored pixels, while the other, which can be described as "inter-channel", also exploits the panchromatic image. The PIR image is obtained from the "inter-channel" color image and the panchromatic image, while the color image is obtained by combining the "intra-channel" intermediate image with the panchromatic image.
An object of the invention is therefore an image acquisition system comprising: a matrix sensor comprising a two-dimensional pixel arrangement, each pixel being adapted to generate an electrical signal representative of the light intensity at a point of an optical image of a scene; and a signal processing circuit configured to process the electrical signals generated by said pixels so as to generate digital images of said scene; wherein said matrix sensor comprises a two-dimensional arrangement of so-called colored pixels, of at least a first type, sensitive to visible light in a first spectral range; a second type, sensitive to visible light in a second spectral range different from the first; and a third type, sensitive to visible light in a third spectral range different from the first and the second, a combination of the spectral ranges of the different types of colored pixels reconstituting the entire visible spectrum; and so-called panchromatic pixels, sensitive to the entire visible spectrum, at least the panchromatic pixels being also sensitive to the near infrared; characterized in that said signal processing circuit is configured to: reconstruct a first set of monochromatic images from the electrical signals generated by the colored pixels; reconstruct a panchromatic image from the electrical signals generated by the panchromatic pixels; reconstruct a second set of monochromatic images from the electrical signals generated by the colored pixels and said panchromatic image; reconstruct a color image by applying a first colorimetric matrix to the monochromatic images of the first set and to said panchromatic image; reconstruct at least one near-infrared image by applying a second colorimetric matrix at least to the monochromatic images of the second set and to said panchromatic image; and output said color image and said at least one near-infrared image.
According to particular embodiments of the invention: said colored pixels may comprise only the pixels of said first, second and third types, which are also sensitive to the near infrared. More particularly, the pixels of one of the first, second and third types may be sensitive to green light, those of another of these types to blue light, and those of the remaining type to red light.
The said matrix sensor may be of sparse type, more than a quarter and preferably at least half of its pixels being panchromatic.
Said signal processing circuit may be configured to reconstruct the monochromatic images of said first set by applying a method comprising the following steps: determining the luminous intensity associated with each pixel of said first type and reconstructing a first monochromatic image of said first set by interpolation of said light intensities; determining the luminous intensity associated with each colored pixel of the other types, and subtracting a value representative of the intensity associated with a corresponding pixel of said first monochromatic image; reconstructing new monochromatic images by interpolating the luminous intensity values of the respective color pixels of said other types, from which said values representative of the intensity associated with a corresponding pixel of said first monochromatic image have been subtracted, and then combining these new reconstructed images with said first monochromatic image to obtain respective final monochromatic images of said first set.
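A one-dimensional sketch of these "intra-channel" steps; the 2-D interpolation of the claim is replaced here by linear interpolation along a line, and all names are illustrative:

```python
def lerp_plane(samples, n):
    """Interpolate sparse samples {position: value} onto n pixels
    (1-D stand-in for the 2-D interpolation of the method)."""
    idx = sorted(samples)
    out = []
    for i in range(n):
        lo = max((j for j in idx if j <= i), default=idx[0])
        hi = min((j for j in idx if j >= i), default=idx[-1])
        if lo == hi:
            out.append(float(samples[lo]))
        else:
            t = (i - lo) / (hi - lo)
            out.append(samples[lo] * (1 - t) + samples[hi] * t)
    return out

def intra_channel(first_type, other_type, n):
    """Steps described above: interpolate the first (e.g. green) plane,
    subtract it from the other color's samples, interpolate the
    difference plane, then add the first plane back."""
    base = lerp_plane(first_type, n)
    diff = {i: v - base[i] for i, v in other_type.items()}
    plane = [d + g for d, g in zip(lerp_plane(diff, n), base)]
    return base, plane
```

Interpolating the color difference rather than the raw samples exploits the inter-channel correlation of natural images, which is the rationale behind the subtraction/re-addition steps of the claim.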
The signal processing circuit may be configured to reconstruct said panchromatic image by interpolating electrical signals generated by the panchromatic pixels.
Said signal processing circuit may be configured to reconstruct the monochromatic images of said second set by calculating the luminance level of each pixel of each said image by applying a linear function, defined locally, to the luminance of the corresponding pixel in the panchromatic image.
Alternatively, said signal processing circuit may be configured to reconstruct the monochromatic images of said second set by calculating the luminance level of each pixel of each said image by means of a non-linear function of the luminance levels of a plurality of pixels of the panchromatic image in a neighborhood of the panchromatic image pixel corresponding to said pixel of said second set image and/or the luminous intensity of a plurality of colored pixels.
Said matrix sensor may consist of a periodic repetition of blocks containing pseudo-random distributions of pixels of the different types, in which case said signal processing circuit is configured to: extract regular patterns of pixels of the same types from said matrix sensor; and reconstruct said first set of monochromatic images by parallel processing of said regular patterns of pixels of the same types.
Said signal processing circuit may also be configured to reconstruct a low luminance monochromatic image by applying a third colorimetric matrix at least to the monochromatic images of the second set and to said panchromatic image.
Said matrix sensor may also comprise a two-dimensional arrangement of pixels sensitive only to the near infrared, in which case said signal processing circuit is configured to reconstruct said near-infrared image also from the electrical signals generated by these pixels.
The system may also include an actuator for producing a relative periodic displacement between the matrix sensor and the optical image, the signal processing circuit being adapted to reconstruct said first and second sets of monochromatic images and said panchromatic image from electrical signals generated by the pixels of the matrix sensor at a plurality of relative positions of the matrix sensor and the optical image.
Said signal processing circuit can be realized from a programmable logic circuit.
Another object of the invention is a visible - near-infrared bi-spectral camera comprising such an image acquisition system and an optical system adapted to form an optical image of a scene on the matrix sensor of the image acquisition system, without near-infrared filtering.
Yet another object of the invention is a method for the simultaneous acquisition of color and near-infrared images using such a bi-spectral camera.
Other characteristics, details and advantages of the invention will emerge on reading the description given with reference to the accompanying drawings, given by way of example, which represent, respectively:
Figure 1A, a block diagram of a camera according to an embodiment of the invention;
FIG. 1B, graphs illustrating the spectral response of the pixels of the matrix sensor of the camera of FIG. 1A;
FIG. 2, a block diagram of the processing implemented by the processor of the camera of FIG. 1A according to one embodiment of the invention;
FIGS. 3A to 6C, diagrams illustrating different stages of the processing of FIG. 2;
FIGS. 7A and 7B, two diagrams illustrating two variants of a processing implemented in a particular embodiment of the invention;
FIG. 8, a matrix sensor of an image acquisition system according to another variant of the invention;
Figures 9A and 9B, images illustrating technical results of the invention.
FIG. 1A shows the highly simplified functional diagram of a visible - near-infrared dual-spectral camera CBS according to one embodiment of the invention. The camera CBS comprises an optical system SO, generally based on lenses, which forms an optical image IO of an observed scene. The image IO is formed on the surface of a matrix sensor CM, comprising a two-dimensional array of pixels; each pixel produces an electrical signal representative of the luminous intensity of the corresponding point (in fact, of a small region) of the optical image, weighted by its spectral sensitivity curve. These electrical signals, usually after being converted to digital format, are processed by a processing circuit CTS that provides digital images at its output: a first, color image (consisting of three monochromatic images of different colors) IVIS and a second, monochromatic image in the near infrared. Optionally, the processing circuit CTS can also output a visible monochromatic image IBNL, useful in particular when the observed scene has a low level of brightness. The matrix sensor CM and the processing circuit CTS constitute what will be called hereinafter an image acquisition system. The CTS circuit may be a microprocessor, preferably a suitably programmed digital signal processor (DSP), or a dedicated digital circuit, made for example from a programmable logic circuit such as an FPGA; it may also be an application-specific integrated circuit (ASIC).
The matrix sensor may be of the CCD or CMOS type; in the latter case, it can integrate an analog-to-digital converter so as to directly supply digital signals at its output. In any case, it comprises at least four different types of pixels: three first types sensitive to spectral bands corresponding to colors which, when mixed, restore the white of the visible spectral band (typically red, green and blue) and a fourth, "panchromatic" type. In a preferred embodiment, all these types of pixels also have a non-zero sensitivity in the near infrared - which is the case for silicon sensors. This sensitivity in the near infrared is generally considered a nuisance, and suppressed with the aid of an optical filter, but it is exploited by the invention. Advantageously, the pixels all have the same structure, and differ only by a filtering coating on their surface (absent in the case of panchromatic pixels), generally based on polymers. Figure 1B shows the sensitivity curves for red (RPB), green (VPB), blue (BPB) and panchromatic (MPB) pixels. Note that the sensitivity in the near infrared is substantially the same for the four types of pixels. The index "PB" means "full band" ("pleine bande"), to indicate that the infrared component of the incident radiation has not been filtered.
As will be explained in detail below, with reference to FIG. 8, the sensor may also comprise pixels of a fifth type, sensitive only to the near infrared.
The figure identifies the visible (VIS, 350 - 700 nm) and near-infrared (PIR, 800 - 1100 nm) spectral ranges. The intermediate band (700 - 800 nm) can be filtered out, but this is not advantageous in the case of the invention; more usefully, it can be treated as near infrared.
Advantageously, the matrix sensor CM may be "sparse", which means that the panchromatic pixels are at least as numerous as, and preferably more numerous than, those of each of the three colors. Advantageously, at least half of the pixels are panchromatic. This improves the sensitivity of the sensor because the panchromatic pixels, having no filter, receive more light than the colored pixels. The arrangement of the pixels may be pseudo-random, but is preferably regular (i.e. periodic in both spatial dimensions) to facilitate image processing operations. It can in particular be periodic over a random pattern, that is to say a periodic repetition of blocks within which the pixels are distributed in a pseudo-random manner. By way of example, the left-hand part of FIG. 3A shows the diagram of a matrix sensor in which one pixel in two is of the panchromatic type (PM), one in four is green (PV), one in eight is red (PR) and one in eight is blue (PB). The fact that the green pixels are more numerous than the red or blue ones reflects the fact that the human eye presents a sensitivity peak for this color.
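The proportions above can be checked on a toy model. The block below is an illustrative layout with the stated ratios (one pixel in two panchromatic, one in four green, one in eight red and one in eight blue); it is an assumption for illustration, not the exact arrangement of FIG. 3A.

```python
import numpy as np

# Hypothetical 4x4 elementary block with the "1/8" sparse proportions:
# M = panchromatic, V = green, R = red, B = blue.
BLOCK = np.array([
    ["M", "V", "M", "R"],
    ["V", "M", "B", "M"],
    ["M", "R", "M", "V"],
    ["B", "M", "V", "M"],
])

def tile_cfa(block, rows, cols):
    """Tile the elementary block periodically over the whole sensor."""
    reps = (rows // block.shape[0] + 1, cols // block.shape[1] + 1)
    return np.tile(block, reps)[:rows, :cols]

cfa = tile_cfa(BLOCK, 8, 8)
# Half the pixels are panchromatic, a quarter green, an eighth red/blue.
counts = {t: int((cfa == t).sum()) for t in "MVRB"}
```

The same tiling function applies to any MxN block, including the pseudo-random 4x4 blocks of the "1/16" scheme of FIG. 3B.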
It is also possible to use a sensor obtained by the regular repetition of a block of dimensions MxN containing a distribution of pseudo-random colored and panchromatic pixels (but with a controlled distribution between these different types of pixels).
For example, FIG. 3B shows a sensor scheme called "1/16" ("1/N" meaning that one pixel out of N is blue or red), in which one distinguishes repeated 4x4 blocks containing 12 panchromatic pixels, 1 red pixel, 1 blue pixel and 2 green pixels distributed pseudo-randomly. It is well understood that several configurations of random distribution patterns are possible, whether in Figure 3A (sparse, of type "1/8") or 3B (sparse, of type "1/16"). Certain patterns may be chosen preferentially for the performance obtained on the output images with the processing that will be described later.
It is also possible to use more than three types of colored pixels, having different sensitivity bands, in order to obtain a plurality of monochromatic (and, where appropriate, near-infrared) images corresponding to these bands. It is thus possible to obtain hyperspectral images.
Moreover, it is not essential that the colored pixels (or all of them) be sensitive to the near infrared: it may suffice that the panchromatic pixels are.
Figure 2 schematically illustrates the processing implemented by the processing circuit CTS. The different stages of this processing will be detailed below with reference to FIGS. 3A to 6C.
The circuit CTS receives as input a set of digital signals representing the light intensity values detected by the different pixels of the matrix sensor CM. In the figure, this set of signals is designated by the expression "sparse full-band image". The first processing operation consists in extracting from this set the signals corresponding to the pixels of the different types. Considering the case of a regular arrangement of MxN blocks of pixels (M > 1 and/or N > 1), one speaks of the extraction of the "patterns" RPB (full-band red), VPB (full-band green), BPB (full-band blue) and MPB (full-band panchromatic). These patterns correspond to subsampled images, i.e. images "with holes"; it is therefore necessary to reconstruct complete images, sampled at the pitch of the matrix sensor.
The reconstruction of a full-band panchromatic image IMPB is the simplest operation, especially when the panchromatic pixels are the most numerous. This is the case of FIG. 3A, where one pixel in two is panchromatic. The luminous intensity corresponding to the "missing" pixels (that is to say the blue, green or red pixels, which are not directly usable to reconstruct the panchromatic image) can be calculated, for example, by a simple bilinear interpolation. The method that has proved most effective is the so-called "median" method: each "missing" pixel is assigned a luminous intensity value which is the median of the luminous intensity values measured by the panchromatic pixels constituting its nearest neighbors.
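A minimal sketch of this median reconstruction is given below, assuming a checkerboard layout of panchromatic pixels so that every missing pixel has 4-connected panchromatic neighbors; the exact neighborhood used by the method is not specified in the text and is an assumption here.

```python
import numpy as np

def reconstruct_panchromatic(img, pan_mask):
    """Fill each missing pixel with the median of its nearest panchromatic
    neighbors (here: the 4-connected neighbors, assumed panchromatic thanks
    to the checkerboard layout)."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            if pan_mask[r, c]:
                continue  # measured panchromatic pixel, keep as-is
            neigh = [img[rr, cc]
                     for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < rows and 0 <= cc < cols and pan_mask[rr, cc]]
            out[r, c] = np.median(neigh)
    return out
```

For production use the double loop would be vectorized, but the principle (median of the nearest panchromatic neighbors) is the same.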
The reconstruction of the full-band color images (red, green, blue) is performed twice, using two different methods. A first method is called "intra-channel" because it uses only the colored pixels to reconstruct the colored images; a second method is called "inter-channel" because it also uses information from the panchromatic pixels. Examples of such methods will be described later with reference to FIGS. 4A-4D (intra-channel) and 5 (inter-channel). Conventionally, only intra-channel methods are used.
Full-band images, whether obtained by an intra-channel or inter-channel method, are not directly exploitable because they are "polluted" by the PIR (near-infrared) component, which is not filtered by the optical system. This PIR component can be eliminated by combining the full-band images IRPB, IVPB, IBPB obtained by the intra-channel method with the full-band panchromatic image by means of a first colorimetric matrix (reference MCol1 in FIG. 6A). A visible color image IVIS is thus obtained, formed of three monochromatic sub-images IR, IV and IB, respectively red, green and blue.
The full-band images IR*PB, IV*PB, IB*PB obtained by the inter-channel method are also combined with the full-band panchromatic image by means of a second colorimetric matrix (reference MCol2 in FIG. 6B). The elements of this matrix, serving as coefficients of the combination, are chosen so as to obtain a near-infrared image IPIR (or, more precisely, a monochromatic image, generally in black and white, representative of the luminance of the image IO in the near-infrared spectral range).
Optionally, the combination of the full-band images IR*PB, IV*PB, IB*PB obtained by the inter-channel method with the full-band panchromatic image by means of a third colorimetric matrix (reference MCol3 in FIG. 6C) makes it possible to obtain a monochromatic image IBNL representative of the luminance of the image IO over the whole visible spectral range, but without pollution by the infrared components. Since it uses signals from the panchromatic pixels, which are more numerous and have no filter, this image can be brighter than the color image IVIS and is therefore particularly suitable for low-light conditions. It is important to note that the image IBNL does not contain a near-infrared contribution, this contribution being in effect filtered out numerically. By contrast, the aforementioned article by D. Hertel et al. describes obtaining a "low light level" image that combines images in the visible and in the near infrared. Such an image is visually different from a purely visible image such as IBNL.
In some cases, one might be interested only in the near-infrared image IPIR and possibly in the low-luminance monochromatic visible image IBNL. In these cases, it would not be necessary to implement the intra-channel reconstruction method.
An advantageous "intra-channel" image reconstruction method will now be described with reference to FIGS. 4A-4D. This description is given only by way of example, because many other methods known in the literature (diffusion approaches, wavelets, color constancy, etc.) may be suitable for the implementation of the invention.
In the matrix sensor of FIG. 3A, the blue pixels are grouped into regular sub-patterns. The pattern formed by these pixels can be decomposed into two sub-patterns SMPB1, SMPB2, identical to one another but spatially offset; each of these sub-patterns has four blue pixels at the corners of a 5 x 5 pixel square; this is illustrated in Figure 4A. This method of decomposition into regular sub-patterns is applicable to pixels of the different types, regardless of their random arrangement within a block of MxN pixels. In particular, when the processing circuit CTS is produced by means of a dedicated digital circuit, it is advantageous to decompose the pattern MPV into sub-patterns that can be processed in parallel.
FIG. 4B illustrates the case where the green pixel pattern is decomposed into four sub-patterns SMPV1, SMPV2, SMPV3 and SMPV4. Complete green monochromatic images IVPB1, IVPB2, IVPB3, IVPB4 are obtained by bilinear interpolation of these sub-patterns. A "full band" green monochromatic image IVPB is then obtained by averaging them.
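The interpolation-and-averaging steps above can be sketched as follows. Separable linear interpolation (rows then columns) is used as a stand-in for the bilinear interpolation mentioned in the text, and the regular sample grid (`rows`, `cols`) as well as the function names are assumptions for illustration.

```python
import numpy as np

def interp_subpattern(values, rows, cols, shape):
    """Reconstruct a full image from one regular sub-pattern.
    values[i, j] is the sample at position (rows[i], cols[j]);
    rows and cols must be increasing (np.interp clamps at the edges)."""
    h, w = shape
    tmp = np.empty((len(rows), w))
    for i, line in enumerate(values):          # horizontal pass
        tmp[i] = np.interp(np.arange(w), cols, line)
    out = np.empty(shape)
    for c in range(w):                         # vertical pass
        out[:, c] = np.interp(np.arange(h), rows, tmp[:, c])
    return out

def average_subpatterns(recos):
    """Average the full images reconstructed from the sub-patterns."""
    return np.mean(recos, axis=0)
```

Bilinear interpolation reproduces any plane exactly, which gives a convenient sanity check.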
The reconstruction of the full-band red and blue images is a little more complex. It is based on a method similar to the "hue consistency" method described in US 4,642,678.
First, the full-band green image IVPB is subtracted from the red and blue pixel patterns. More specifically, this means that a value representative of the intensity of the corresponding pixel of the full-band green image IVPB is subtracted from the signal of each red or blue pixel. The red pixel pattern is decomposed into two sub-patterns SMPR1, SMPR2; after subtraction, the modified sub-patterns SMPR1', SMPR2' are obtained; similarly, the blue pixel pattern is decomposed into two sub-patterns SMPB1, SMPB2; after subtraction, the modified sub-patterns SMPB1', SMPB2' are obtained. This is illustrated in Figure 4C.
Then, as illustrated in FIG. 4D, images IR1'PB, IR2'PB are obtained by bilinear interpolation of the red sub-patterns, and averaged to provide a modified red image IRPB'. Similarly, images IB1'PB, IB2'PB are obtained by bilinear interpolation of the blue sub-patterns, and averaged to provide a modified blue image IBPB'. The full-band red image IRPB and the full-band blue image IBPB are obtained by adding the full-band green image IVPB to the modified red and blue images IRPB', IBPB'. The advantage of proceeding in this way, by subtracting the reconstructed green image from the red and blue pixel patterns and adding it back at the end of the processing, is that the modified patterns have a low dynamic range, which reduces interpolation errors. The problem is less acute for green, which is sampled more finely.
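The subtract-interpolate-add structure of this hue-consistency scheme can be sketched as below. The interpolator is deliberately a toy (fill missing sites with the mean of the known samples) standing in for the bilinear interpolation of the text; function names and the interpolator are assumptions.

```python
import numpy as np

def fill_mean(sparse, mask):
    """Toy interpolator: fill missing sites with the mean of known samples."""
    out = sparse.astype(float).copy()
    out[~mask] = sparse[mask].mean()
    return out

def reconstruct_color(samples, mask, green_full, interp=fill_mean):
    """Hue-consistency reconstruction of a red or blue plane:
    interpolate the difference (color - green), which has a low dynamic
    range, then add the full green image back."""
    diff = np.where(mask, samples - green_full, 0.0)   # C' = C - V at sample sites
    diff_full = interp(diff, mask)                     # interpolate the difference
    return diff_full + green_full                      # C = C' + V
```

In a constant-hue region the difference plane is constant, so even the toy interpolator reconstructs the color plane exactly; that is precisely why interpolating the difference limits errors.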
The inter-channel reconstruction is performed differently. It explicitly exploits the panchromatic pixels, unlike the intra-channel reconstruction, which only exploits the red, green and blue pixels. For example, it can be performed using an algorithm that can be described as a "monochrome law", which is illustrated with the aid of Figures 5A and 5B. The idea underlying this algorithm is that the colored components (green, red and blue) generally have spatial variations that approximately "follow" those of the panchromatic component. It is therefore possible to use the panchromatic component, reconstituted by bilinear interpolation and benefiting from its denser spatial sampling, to calculate the luminance level of the missing pixels of the colored components. More particularly, the luminance level of each colored pixel can be determined by applying a linear or affine function to the luminance of the corresponding pixel of the panchromatic image. The linear or affine function in question is determined locally and depends on the luminance levels of the colored pixels already known (because measured directly, or already calculated).
Figure 5A refers to the case of a sparse sensor in which only one line in four contains blue pixels; within these lines, one pixel in four is blue. There is also a "full" panchromatic image, defined for all the pixels of the sensor, obtained in the manner described above with reference to FIG. 3A. FIG. 5B shows a segment of the matrix sensor comprising two blue pixels separated by three pixels for which the blue component is not defined; superimposed on this line portion is a portion of the panchromatic image, defined on all the pixels. We denote by C1 and C5 the known luminances of the two blue (more generally, colored) pixels at the ends of the line segment, by C2, C3 and C4 the luminances, to be calculated, of the blue component at the three intermediate pixels, and by M1 to M5 the known luminances of the panchromatic image at these same pixels.
The first step of the method consists in reconstructing lines of the blue component of the image using the panchromatic image; only the lines containing blue pixels, i.e. one line out of four, are reconstructed this way. At the end of this step, there are complete blue lines, separated by lines in which the blue component is not defined. Looking at the columns, one notices that in every column one pixel in four is now blue. Blue columns can then be reconstructed by interpolation aided by the knowledge of the panchromatic image, as was done for the lines. The same goes for the green and red components. The application of the monochrome law to reconstruct a colored component of the image takes place in the following manner.
Consider the pixels M1 to M5 of the reconstituted panchromatic image which lie between two pixels C1 and C5 of the pattern of the considered color, including the extreme pixels M1, M5, which are co-located with these two colored pixels. It is then determined whether the corresponding portion of the panchromatic image can be considered uniform. To do this, the total panchromatic luminance variation between M1 and M5 is compared with a threshold Th. If |M5 - M1| < Th, the area is considered uniform; otherwise it is considered non-uniform.
If the area of the panchromatic image is considered uniform, it is checked whether the total panchromatic luminance M1 + M2 + M3 + M4 + M5 is below a threshold, a function notably of the thermal noise, in which case the panchromatic image does not contain usable information and the reconstruction of the colored component (more precisely, the calculation of the luminance of the colored pixels C2, C3 and C4) is done by linear interpolation between C1 and C5. Otherwise, a step-by-step reconstruction is performed: Ci+1 = Ci x Mi+1 / Mi, for i = 1 to 3.
In other words, the luminance of each color pixel to be reconstructed is determined from that of the immediately preceding color pixel, in the order of reconstruction, by applying the local variation rate measured on the panchromatic image.
If the luminance must be an integer value, the calculated result is rounded.
If the area of the panchromatic image is not considered uniform (|M5 - M1| > Th), then the colored pixels C2 - C4 can be reconstructed directly by applying a "monochrome law", that is to say the affine function expressing Ci as a function of Mi (i = 1 - 5) and such that the calculated values of C1 and C5 coincide with the measured values: Ci = a x Mi + b, with a = (C5 - C1) / (M5 - M1) and b = C1 - a x M1.
Again, if the luminance must be an integer value, the calculated result is rounded.
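The whole segment reconstruction (uniformity test, noise test, step-by-step rule, affine monochrome law) can be sketched as follows; the threshold values are placeholders, and the step-by-step branch assumes non-zero panchromatic luminances.

```python
import numpy as np

def reconstruct_segment(c1, c5, m, th=8.0, noise_th=20.0):
    """Reconstruct the colored luminances C1..C5 of a 5-pixel segment.
    c1, c5: measured colored pixels at the segment ends;
    m: the 5 co-located panchromatic luminances M1..M5;
    th, noise_th: uniformity and noise thresholds (placeholder values)."""
    m = np.asarray(m, dtype=float)
    c = np.empty(5)
    c[0], c[4] = c1, c5
    if abs(m[4] - m[0]) < th:                        # zone considered uniform
        if m.sum() < noise_th:                       # no usable information
            c[1:4] = np.linspace(c1, c5, 5)[1:4]     # linear interpolation
        else:                                        # step-by-step reconstruction
            for i in range(1, 4):
                c[i] = c[i - 1] * m[i] / m[i - 1]    # apply local variation rate
    else:                                            # "monochrome law": affine fit
        a = (c5 - c1) / (m[4] - m[0])
        b = c1 - a * m[0]
        c[1:4] = a * m[1:4] + b
    return c
```

Rounding to integer luminances and the fallback on excessive dynamic range or saturation, described in the text, would be added around this core.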
The reconstruction by direct application of the monochrome law can lead to too great a dynamic range of the reconstructed color component, or to its saturation. In such a case, it may be appropriate to fall back on a step-by-step reconstruction. For example, an excessive dynamic range can be detected when a local criterion exceeds a threshold Th1, usually different from Th.
Saturation can be detected if min(Ci) < 0 or if max(Ci) is greater than a maximum admissible value (65535 in the case of a luminance expressed as a 16-bit integer).
Of course, the configuration of FIGS. 5A and 5B - which involves a reconstruction of the colored components in segments of 5 pixels - is given solely by way of example and the generalization of the method to other configurations does not pose any significant difficulty.
Variations of this method are possible. For example, another approach for determining the luminance of the colored pixels, using both nearby colored pixels (located at a certain distance depending on the pattern of the colored pixels in question) and reconstructed neighboring panchromatic pixels, consists in using non-linear functions approximating the distribution of colored pixels, for example a polynomial approximation (and, more generally, an approximation by a non-linear spatial surface function) of the neighboring panchromatic pixels. The advantage of these non-linear functions, whether single-axis or, on the contrary, two-axis (surface) functions, is that they take into account the distribution of colored pixels on a larger scale than the colored pixels nearest to the pixel to be reconstructed. In the same vein, it is also possible to use more general value-diffusion functions that exploit the local gradients and the sudden jumps appearing in the luminance values of the panchromatic pixels. Whatever the method used, the principle remains the same: exploit the panchromatic pixels, which are more numerous than the colored pixels, and their law of variation, to reconstruct the colored pixels.
While the monochrome-law method involves a two-pass, single-axis approach, the use of surface functions or diffusion equations allows a single-pass approach to reconstructing the colored pixels. At this stage of the processing, a full-band panchromatic image IMPB and two sets of three full-band monochromatic images, (IRPB, IVPB, IBPB) and (IR*PB, IV*PB, IB*PB), are available. As mentioned above, none of these images is directly exploitable. However, a color image in visible light IVIS can be obtained by combining the full-band images of the first set (IRPB, IVPB, IBPB) and the full-band panchromatic image IMPB via a 3x4 colorimetric matrix MCol1. More precisely, the red component IR of the visible image IVIS is given by a linear combination of IRPB, IVPB, IBPB and IMPB with coefficients a11, a12, a13 and a14. Similarly, the green component IV is given by a linear combination of IRPB, IVPB, IBPB and IMPB with coefficients a21, a22, a23 and a24, and the blue component IB is given by a linear combination of IRPB, IVPB, IBPB and IMPB with coefficients a31, a32, a33 and a34. This is illustrated in Figure 6A.
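The pixelwise linear combination above amounts to one matrix product per pixel, which can be sketched in a few lines; the matrix coefficients used here are placeholders (a crude panchromatic subtraction), not calibrated values, and the function name is illustrative.

```python
import numpy as np

def apply_colorimetric_matrix(mcol, planes):
    """Pixelwise linear combination of full-band planes.
    mcol: (k, n) colorimetric matrix; planes: n images of identical shape.
    Returns k output components stacked on axis 0."""
    stack = np.stack(planes)                    # (n, H, W)
    return np.tensordot(mcol, stack, axes=1)    # (k, H, W)

# Placeholder 3x4 matrix MCol1: each visible component is the corresponding
# full-band plane minus the panchromatic plane (illustrative coefficients only).
MCOL1 = np.array([[1.0, 0.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0, -1.0],
                  [0.0, 0.0, 1.0, -1.0]])
```

The same function applies unchanged to the 1x4 matrix MCol2 yielding IPIR and to the 1x4 matrix MCol3 yielding IBNL, since only the matrix dimensions differ.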
The visible image IVIS can then be improved by a conventional white-balance operation, to take into account the difference between the illumination of the scene and the illumination used to establish the coefficients of the colorimetric matrix.
Similarly, a near-infrared image IPIR can be obtained by combining the full-band images of the second set (IR*PB, IV*PB, IB*PB) and the full-band panchromatic image IMPB via a second, 1x4 colorimetric matrix MCol2. In other words, the near-infrared image IPIR is given by a linear combination of IR*PB, IV*PB, IB*PB and IMPB with coefficients a41, a42, a43 and a44. This is illustrated in Figure 6B.
If several types of pixels have different spectral sensitivities in the near infrared, it is possible to obtain a plurality of different near-infrared images, corresponding to NPIR different spectral sub-bands (with NPIR > 1). In this case, the second colorimetric matrix MCol2 becomes an NPIR x (NPIR + 3) matrix, NPIR being the number of near-infrared images that one wishes to obtain. The case treated previously is the particular case where NPIR = 1.
The near-infrared image IPIR can then be improved by a conventional spatial filtering operation. This operation may be, for example, a contour enhancement, whether or not associated with adaptive noise filtering (among the possible contour-enhancement techniques, one may mention convolving the image with a high-pass filter).
The matrices MCol1 and MCol2 are in fact sub-matrices of one and the same colorimetric matrix "A", of dimensions 4x4 in the particular case where NPIR = 1 and where there are 3 types of colored pixels, which is not used as such.
The size of the colorimetric matrices must be changed if the matrix sensor has more than three different types of colored pixels. For example, just as there may be NPIR pixel types having different spectral sensitivities in the infrared, there may be NVIS (with NVIS ≥ 3) pixel types sensitive to different sub-bands in the visible, in addition to the unfiltered panchromatic pixels. These NVIS pixel types can also have different spectral sensitivities in the near infrared, allowing the acquisition of NPIR PIR images. Assuming that there are no pixels sensitive only in the infrared, the colorimetric matrix MCol1 is then of dimension 3 x (NVIS + 1).
Moreover, the monochromatic visible image IBNL can be obtained by combining the full-band images of the second set (IR*PB, IV*PB, IB*PB) and the full-band panchromatic image IMPB by means of a third, 1x4 colorimetric matrix MCol3. In other words, the image IBNL is given by a linear combination of IR*PB, IV*PB, IB*PB and IMPB, with coefficients forming the last row of another 4x4 colorimetric matrix which is not used as such. This is illustrated in Figure 6C. The image IBNL can in turn be improved by a conventional spatial filtering operation.
The colorimetric matrices can be obtained by a calibration method. This consists, for example, in using a test chart on which different paints, reflective in the visible and in the PIR, have been deposited, in illuminating it with controlled illumination, and in comparing the theoretical luminance values that these paints should have in the visible range and in the PIR with those measured, using a 4x4 colorimetric matrix whose coefficients are fitted by least squares. The colorimetric matrix can also be improved by weighting the colors that one wishes to render in a privileged way, or by adding measurements made on natural objects present in the scene. The proposed method (exploitation of the PIR in addition to color, use of a 4x4 matrix, use of different paints reflecting both in the visible and in the PIR) differs from conventional methods, which are confined to color and use a 3x3 matrix and a conventional test chart such as the X-Rite ColorChecker (Macbeth chart).
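The least-squares fit described above can be sketched as follows, assuming the measured responses and theoretical luminances have been averaged per test-chart patch; all patch values below are synthetic, for illustration only.

```python
import numpy as np

def calibrate_matrix(measured, theoretical):
    """Fit the colorimetric matrix A by least squares.
    measured: (P, 4) full-band responses (R, V, B, M) for P patches;
    theoretical: (P, 4) target luminances (R, G, B visible + PIR).
    Minimises ||measured @ A.T - theoretical||^2 and returns A (4x4)."""
    a_t, *_ = np.linalg.lstsq(measured, theoretical, rcond=None)
    return a_t.T  # rows of A give the output components
```

With at least four linearly independent patches the fit is well-posed; weighting preferred colors, as mentioned above, would amount to scaling the corresponding rows of `measured` and `theoretical` before the fit.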
FIGS. 7A and 7B illustrate a possible improvement of the invention, implementing a micro-scanning of the sensor. The micro-scanning consists of a periodic displacement (oscillation) of the matrix sensor in the image plane, or of the image with respect to the sensor. This displacement can be obtained by means of a piezoelectric actuator, or an actuator of the DC-motor type, acting on the matrix sensor or on at least one optical image-forming element. Several (usually two) image acquisitions are made during this displacement, which improves the spatial sampling and thus facilitates the reconstruction of the images. Of course, this requires a higher acquisition rate than without micro-scanning.
In the example of FIG. 7A, the matrix sensor CM has a particular structure: one column out of two consists of panchromatic pixels PM, one out of four of an alternation of green pixels PV and blue pixels PB, and one out of four of an alternation of red pixels PR and green pixels PV. The micro-scanning is effected by means of an oscillation of amplitude equal to the width of one pixel, in a direction perpendicular to that of the columns. It can be seen that the place occupied by a "panchromatic" column when the sensor is at one end of its displacement is occupied by a "colored" column when it is at the opposite end, and vice versa. The image acquisition rate is higher than without micro-scanning (for example twice the frequency used without micro-scanning), so as to add information to the images acquired without micro-scanning.
Taking the example of a doubled acquisition frequency, from two images acquired at two opposite extreme positions of the sensor (left-hand part of Figure 7A) it is possible to reconstruct (right-hand part of the figure): a color image formed by the repetition of a four-pixel pattern (two greens arranged diagonally, one blue and one red, a so-called "Bayer matrix"), formed of reconstructed pixels having a shape elongated in the direction of the displacement with an aspect ratio of 2; and a "full" panchromatic image, directly exploitable without any need for interpolation; these two reconstructed images being obtained at half the acquisition rate.
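The assembly of the "full" panchromatic image from the two extreme positions can be sketched as below, assuming the two frames have been registered to a common scene grid and that `pan_cols_a` marks the scene columns sampled by panchromatic pixels in the first frame; the indexing convention and function name are assumptions.

```python
import numpy as np

def merge_panchromatic(frame_a, frame_b, pan_cols_a):
    """Assemble a full panchromatic image from two registered frames of a
    one-pixel horizontal micro-scan: scene columns that were panchromatic
    in frame A are taken from A, the remaining columns (panchromatic in
    frame B after the shift) are taken from B."""
    return np.where(pan_cols_a[None, :], frame_a, frame_b)
```

A similar interleaving of the colored columns of the two frames yields the Bayer-pattern color image described above.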
Indeed, the micro-scanning completes the information of the panchromatic and colored pixels, and the processing presented in the context of the present invention can be applied directly to the patterns generated by the detector before micro-scanning and to the additional patterns obtained after micro-scanning; it is therefore not essential to use a specific layout such as that of Figure 7A, which can however be processed with simplified algorithms. Furthermore, the micro-scanning can be performed along two axes and/or have an amplitude greater than the width of one pixel. By way of example, FIG. 7B illustrates the application of the micro-scanning to the sensor CM of FIG. 3A. In this sparse configuration example, of type 1/8, the micro-scanning results in all the panchromatic pixels being covered and adds 2 blue sub-patterns, 2 red sub-patterns and 4 green sub-patterns that fall on previously panchromatic locations, which in total makes it possible to apply the processing to twice as many sub-patterns as initially. In more complex sparse configurations (1/N with N >= 16), the micro-scanning can be generalized to M positions along the two dimensions of the matrix simultaneously, gaining a factor M on the number of sub-patterns.
So far, only the case of a matrix sensor with exactly four types of pixels - red, green, blue and panchromatic - has been considered, but this is not an essential limitation. It is possible to use three or more types of colored pixels having sensitivity curves different from those illustrated in FIG. 1B. In addition, it is possible to use a fifth type of pixel, sensitive only to radiation in the near infrared. By way of example, FIG. 8 represents a matrix sensor in which one pixel out of four is panchromatic (PM reference), one pixel in four is sensitive only to the near infrared (PI), one pixel in four is green (PV), one pixel in eight is red (PR) and one pixel in eight is blue (PB).
The signals from the pixels of the fifth type can be used in different ways. For example, it is possible to reconstruct, by the "intra-channel" method of FIGS. 4A-4D, a near-infrared image, designated for example by IPIRD (the exponent "D" signifying that it is a "directly" acquired image), which can be averaged with the image IPIR obtained by applying the colorimetric matrix MCol2 to the images IR*PB, IV*PB, IB*PB and IMPB. It is also possible to use a colorimetric matrix MCol2 of size 1x5 and to obtain the near-infrared image IPIR by a linear combination of IR*PB, IV*PB, IB*PB, IMPB and IPIRD with matrix coefficients a41, a42, a43, a44 and a45. The image IPIRD can also be used to calibrate the colorimetric matrix MCol1, which then contains one more column and becomes of size 3 x (NVIS + 2): each reconstructed red, green and blue component is then expressed as a function of the NVIS planes reconstructed in full band, the reconstructed panchromatic plane and the PIR plane IPIRD reconstructed from the pixels of the fifth type.
Figure 9A is a composite image of a scene, seen in visible light. The left-hand part of the image, IVIS, was obtained according to the invention, using the "sparse" matrix sensor of FIG. 3A. The right-hand part, I'VIS, was obtained by a conventional method, using a non-sparse sensor. The quality of the two images is comparable.
Figure 9B is a composite image of the same scene, observed in the near infrared. The left-hand part of the image, IPIR, was obtained according to the invention, using the "sparse" matrix sensor of FIG. 3A. The right-hand part, I'PIR, was obtained using a PIR camera with a non-sparse sensor. The images are of comparable quality, but the image IPIR obtained according to the invention is brighter thanks to the use of a sparse sensor.
Claims (15)
1.
An image acquisition system (SAI) comprising: a matrix sensor (CM) comprising a two-dimensional array of pixels, each pixel being adapted to generate an electrical signal representative of the light intensity at a point of an optical image (IO) of a scene (SC); and a signal processing circuit (CTS) configured to process the electrical signals generated by said pixels so as to generate digital images (IVIS, IPIR) of said scene; wherein said matrix sensor comprises a two-dimensional arrangement: of pixels, called colored, of at least a first type (PV), sensitive to visible light in a first spectral range; a second type (PB), sensitive to visible light in a second spectral range different from the first; and a third type (PR), sensitive to visible light in a third spectral range different from the first and the second, a combination of the spectral ranges of the different types of colored pixels reconstituting the entire visible spectrum; and of pixels, called panchromatic (PM), sensitive to the entire visible spectrum, at least the panchromatic pixels being also sensitive to the near infrared; characterized in that said signal processing circuit is configured to: reconstruct a first set of monochromatic images (IVPB, IBPB, IRPB) from the electrical signals generated by the colored pixels; reconstruct a panchromatic image (IMPB) from the electrical signals generated by the panchromatic pixels; reconstruct a second set of monochromatic images (IV*PB, IB*PB, IR*PB) from the electrical signals generated by the colored pixels, and from said panchromatic image; reconstruct a color image (IVIS) by applying a first colorimetric matrix (MCol1) to the monochromatic images of the first set and to said panchromatic image; reconstruct at least one near-infrared image (IPIR) by applying a second colorimetric matrix (MCol2) at least to the monochromatic images of the second set and to said panchromatic image; and output said color image and said at least one near-infrared image.
2.
An image acquisition system according to claim 1, wherein said color pixels comprise only the pixels of said first, second and third types, which are also near-infrared sensitive.
3.
An image acquisition system according to claim 2, wherein the pixels of one of the first, second and third types are sensitive to green light, those of another of said types are sensitive to blue light, and those of the remaining type are sensitive to red light.
4. An image acquisition system according to one of the preceding claims wherein said matrix sensor is of sparse type, more than a quarter and preferably at least half of its pixels being panchromatic.
5.
An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is configured to reconstruct the monochromatic images of said first set by applying a method comprising the steps of: determining the luminous intensity associated with each pixel of said first type and reconstructing a first monochromatic image (IVPB) of said first set by interpolation of said luminous intensities; determining the luminous intensity associated with each colored pixel of the other types, and subtracting therefrom a value representative of the intensity associated with a corresponding pixel of said first monochromatic image; reconstructing new monochromatic images (IB'PB, IR'PB) by interpolating the luminous intensity values of the respective colored pixels of said other types, from which said values representative of the intensity associated with a corresponding pixel of said first monochromatic image have been subtracted; and then combining these new reconstructed images with said first monochromatic image (IVPB) to obtain respective final monochromatic images (IBPB, IRPB) of said first set.
6.
An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is configured to reconstruct said panchromatic image by interpolation of electrical signals generated by the panchromatic pixels.
7. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is configured to reconstruct the monochromatic images of said second set by computing the luminance level of each pixel of each said image through the application of a locally defined linear function to the luminance of the corresponding pixel of the panchromatic image.
8. An image acquisition system according to one of claims 1 to 6, wherein said signal processing circuit is configured to reconstruct the monochromatic images of said second set by computing the luminance level of each pixel of each said image by means of a non-linear function of the luminance levels of a plurality of pixels of the panchromatic image in a neighborhood of the panchromatic-image pixel corresponding to said pixel of said second-set image, and/or of the luminous intensities of a plurality of colored pixels.
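Claim 8 deliberately leaves the non-linear function open. One illustrative instance, not the claimed function, is a panchromatic-guided weighted mean: colored pixels whose panchromatic luminance resembles that of the target pixel contribute more. The Gaussian weighting, `sigma`, and all names below are assumptions.

```python
import numpy as np

def guided_estimate(pan, colors, mask, center, sigma=5.0):
    """Estimate the colour at `center` as a non-linear (similarity-
    weighted) mean of the colored pixels flagged in `mask`, guided by
    the panchromatic image `pan`."""
    i, j = center
    ii, jj = np.nonzero(mask)                      # colored-pixel coordinates
    # weight each colored sample by its panchromatic similarity to the target
    w = np.exp(-((pan[ii, jj] - pan[i, j]) ** 2) / (2.0 * sigma ** 2))
    return float(np.sum(w * colors[ii, jj]) / np.sum(w))
```

On a uniform panchromatic patch all weights are equal and the estimate reduces to the plain mean of the colored samples.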
9. An image acquisition system according to one of the preceding claims, wherein said matrix sensor is constituted by a periodic repetition of blocks containing pseudo-random distributions of pixels of different types, and wherein said signal processing circuit is configured to: extract regular patterns of pixels of the same types from said matrix sensor; and reconstruct said first set of monochromatic images by processing said regular patterns of pixels of the same types in parallel.
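Because the block is repeated periodically, every pixel position inside the block defines a regular sub-grid across the whole sensor; gathering these sub-grids per type yields the regular patterns of claim 9, each processable independently. A sketch under assumed names (`block_map` encodes the pseudo-random layout of one block):

```python
import numpy as np

def extract_regular_patterns(raw, block_map):
    """For a sensor built as a periodic tiling of `block_map`, return,
    for each pixel type, the list of regular sub-grids of `raw` formed
    by that type's offsets inside the block."""
    bh, bw = block_map.shape
    H, W = raw.shape
    patterns = {}
    for t in np.unique(block_map):
        for di, dj in zip(*np.nonzero(block_map == t)):
            # all samples of type t at block offset (di, dj): a regular grid
            patterns.setdefault(t, []).append(raw[di:H:bh, dj:W:bw])
    return patterns
```

Each sub-grid in the returned lists is regular, so the per-type interpolations can run in parallel on simple, uniform lattices.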
10. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is also configured to reconstruct a low-luminance monochromatic image by applying a third colorimetric matrix (MCol3) at least to the monochromatic images of the second set and to said panchromatic image.
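Applying a colorimetric matrix as in claim 10 amounts to a per-pixel matrix product over a stack of channel images. A minimal sketch (the MCol3 coefficients below are placeholders, not values from the patent):

```python
import numpy as np

def apply_colorimetric_matrix(mcol, channels):
    """Apply an M x N colorimetric matrix to N channel images
    (e.g. the second-set monochromatic images plus the panchromatic
    image), producing M output images per pixel."""
    stack = np.stack(channels, axis=-1)      # (H, W, N)
    return stack @ np.asarray(mcol).T        # (H, W, M)
```

With a 1×N matrix this directly yields a single low-luminance monochromatic image as a weighted combination of the input channels.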
11. An image acquisition system according to one of the preceding claims, wherein said matrix sensor (CM) also comprises a two-dimensional arrangement of pixels (PI) sensitive only to the near infrared, and wherein said signal processing circuit is configured to reconstruct said near-infrared image (IPir) also from the electrical signals generated by these pixels.
12. An image acquisition system according to one of the preceding claims, further comprising an actuator for producing a periodic relative displacement between the matrix sensor and the optical image, wherein said signal processing circuit is adapted to reconstruct said first and second sets of monochromatic images and said panchromatic image from the electrical signals generated by the pixels of the matrix sensor in correspondence with a plurality of different relative positions of the matrix sensor and the optical image.
13. An image acquisition system according to one of the preceding claims, wherein said signal processing circuit is made from a programmable logic circuit.
14. A bi-spectral visible/near-infrared camera (CBS) comprising: an image acquisition system (IAS) according to one of the preceding claims; and an optical system (SO) adapted to form an optical image (IO) of a scene (SC) on the matrix sensor (CM) of said image acquisition system, without near-infrared filtering.
15. A method for the simultaneous acquisition of color and near-infrared images by means of a bi-spectral camera according to claim 14.
Similar technologies:
Publication number | Publication date | Patent title
EP3387824B1|2020-09-30|System and method for acquiring visible and near infrared images by means of a single matrix sensor
EP2987321B1|2018-05-30|Device for acquiring bimodal images
JP6952277B2|2021-10-20|Imaging equipment and spectroscopic system
EP2160904B1|2014-05-21|Digital image sensor, image capture and reconstruction method and system for implementing same
EP3657784A1|2020-05-27|Method for estimating a fault of an image capturing system and associated systems
FR2964490A1|2012-03-09|METHOD FOR DEMOSAICING DIGITAL RAW IMAGE, COMPUTER PROGRAM, AND CORRESPONDING IMAGING OR GRAPHICS CIRCUIT
US20140240548A1|2014-08-28|Image Processing Based on Moving Lens with Chromatic Aberration and An Image Sensor Having a Color Filter Mosaic
CA2961118A1|2016-03-31|Bimode image acquisition device with photocathode
FR3071124B1|2019-09-06|DEVICE FOR CAPTURING A HYPERSPECTRAL IMAGE
Aggarwal et al.2013|Multi-spectral demosaicing technique for single-sensor imaging
Ni et al.2018|Single-shot multispectral imager using spatially multiplexed fourier spectral filters
FR2895823A1|2007-07-06|Human, animal or object surface`s color image correcting method, involves forcing output of neuron of median layer in neural network, during propagation steps of learning phase, to luminance value of output pixel
FR2966257A1|2012-04-20|METHOD AND APPARATUS FOR CONSTRUCTING A RELIEVE IMAGE FROM TWO-DIMENSIONAL IMAGES
FR3054093A1|2018-01-19|METHOD AND DEVICE FOR DETECTING AN IMAGE SENSOR
EP3763116B1|2022-01-05|Process of reconstructing a color image acquired by an image sensor covered with a color filter mosaic
FR3054707A3|2018-02-02|METHOD FOR ACQUIRING COLOR IMAGES UNDER INCOMING AMBIENT LIGHTING
Liu et al.2017|Color demosaicking with the spatial alignment property of spectral Laplacians
Wu et al.2019|High Joint Spectral-Spatial Resolution Imaging via Nanostructured Random Broadband Filtering
FR3098962A1|2021-01-22|System for detecting a hyperspectral feature
Khademhosseinieh et al.2011|Lensless on-chip color imaging using nano-structured surfaces and compressive decoding
CA2647307A1|2007-09-27|Spectrum-forming device on an optical sensor with spatial rejection
Family patents:
Publication number | Publication date
KR20180094029A|2018-08-22|
IL259620A|2021-07-29|
CN108370422A|2018-08-03|
ES2837055T3|2021-06-29|
WO2017097857A1|2017-06-15|
US10477120B2|2019-11-12|
JP6892861B2|2021-06-23|
FR3045263B1|2017-12-08|
JP2018538753A|2018-12-27|
EP3387824B1|2020-09-30|
EP3387824A1|2018-10-17|
US20180359432A1|2018-12-13|
CN108370422B|2021-03-16|
IL259620D0|2018-07-31|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US20090285476A1|2008-05-19|2009-11-19|Won-Hee Choe|Apparatus and method for combining images|
WO2014170359A1|2013-04-17|2014-10-23|Photonis France|Device for acquiring bimodal images|
US20150163418A1|2013-12-05|2015-06-11|Omnivision Technologies, Inc.|Image Sensors For Capturing Both Visible Light Images And Infrared Light Images, And Associated Systems And Methods|
US4642678A|1984-09-10|1987-02-10|Eastman Kodak Company|Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal|
US7012643B2|2002-05-08|2006-03-14|Ball Aerospace & Technologies Corp.|One chip, low light level color camera|
EP1528791A1|2003-10-29|2005-05-04|Thomson Licensing S.A.|Method for colour correction of digital image data|
US8259203B2|2007-12-05|2012-09-04|Electro Scientific Industries, Inc.|Method and apparatus for achieving panchromatic response from a color-mosaic imager|
US8619143B2|2010-03-19|2013-12-31|Pixim, Inc.|Image sensor including color and infrared pixels|
CN102404581A|2011-11-02|2012-04-04|清华大学|Color image processing method and device based on interpolation and near infrared|
EP2791898A4|2011-12-02|2015-10-21|Nokia Technologies Oy|Method, apparatus and computer program product for capturing images|
EP3133812A4|2014-04-14|2017-08-16|Sharp Kabushiki Kaisha|Photo detection apparatus, solid-state image pickup apparatus, and methods for making them|
FR3026223B1|2014-09-22|2016-12-23|Photonis France|APPARATUS FOR ACQUIRING PHOTOCATHODE BIMODE IMAGES.|
CN104301634A|2014-10-24|2015-01-21|Sichuan University|Short wave infrared single pixel camera based on random sampling|
US10740883B2|2015-12-10|2020-08-11|Qiagen Gmbh|Background compensation|
FR3071328B1|2017-09-18|2019-09-13|Thales|INFRARED IMAGER|
CN108282644B|2018-02-14|2020-01-10|北京飞识科技有限公司|Single-camera imaging method and device|
WO2021033787A1|2019-08-19|2021-02-25|한국전기연구원|Visible and near-infrared image providing system and method which use single color camera and can simultaneously acquire visible and near-infrared images|
CN110611779B|2019-09-27|2021-11-26|华南师范大学|Imaging device and imaging method for simultaneously acquiring visible light and near infrared wave bands based on single image sensor|
Legal status:
2016-11-28| PLFP| Fee payment|Year of fee payment: 2 |
2017-06-16| PLSC| Publication of the preliminary search report|Effective date: 20170616 |
2017-11-27| PLFP| Fee payment|Year of fee payment: 3 |
2019-11-28| PLFP| Fee payment|Year of fee payment: 5 |
2020-11-25| PLFP| Fee payment|Year of fee payment: 6 |
Priority:
Application number | Filing date | Patent title
FR1502572A|FR3045263B1|2015-12-11|2015-12-11|SYSTEM AND METHOD FOR ACQUIRING VISIBLE AND NEAR INFRARED IMAGES USING A SINGLE MATRIX SENSOR|
CN201680072568.1A| CN108370422B|2015-12-11|2016-12-07|System and method for collecting visible and near infrared images using a single matrix sensor|
JP2018530147A| JP6892861B2|2015-12-11|2016-12-07|Systems and methods for acquiring visible and near-infrared images with a single matrix sensor|
ES16808629T| ES2837055T3|2015-12-11|2016-12-07|System and procedure for acquiring images visible in the near infrared by means of a unique matrix sensor|
EP16808629.6A| EP3387824B1|2015-12-11|2016-12-07|System and method for acquiring visible and near infrared images by means of a single matrix sensor|
KR1020187019757A| KR20180094029A|2015-12-11|2016-12-07|Systems and methods for obtaining visible and near infrared images by a single matrix sensor|
US15/779,050| US10477120B2|2015-12-11|2016-12-07|System and method for acquiring visible and near infrared images by means of a single matrix sensor|
PCT/EP2016/080139| WO2017097857A1|2015-12-11|2016-12-07|System and method for acquiring visible and near infrared images by means of a single matrix sensor|
IL259620A| IL259620A|2015-12-11|2018-05-27|System and method for acquiring visible and near infrared images by means of a single matrix sensor|